
    Anti-telephones in transformation optics: metamaterials with closed null geodesics

    We apply the methods of transformation optics to theoretical descriptions of spacetimes that support closed null geodesic curves. The metric used is based on frame-dragging spacetimes, such as the van Stockum dust or the Kerr black hole. Through transformation optics, this metric is analogous to a material that should, in theory, allow for communication between past and future. Presented herein is a derivation and description of the spacetime and the resulting permeability, permittivity, and magneto-electric couplings that a material would need in order for light in the material to follow closed null geodesics. We also address the paradoxical implications of such a material, and demonstrate why it would not actually result in a violation of causality. A full derivation of the Plebanski equations is also included.
    Comment: Submitted to Phys. Rev. D. This is a pre-print, uploaded here in hopes of wider review and community feedback.
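    For reference, the Plebanski equations mentioned above map a spacetime metric onto effective material parameters. The following is the standard textbook form rather than this paper's specific derivation, and sign and unit conventions vary between authors:

```latex
% Plebanski constitutive relations (standard form; conventions vary)
D^i = \varepsilon_0\,\varepsilon^{ij} E_j - \frac{1}{c}\,\epsilon^{ijk} w_j H_k, \qquad
B^i = \mu_0\,\mu^{ij} H_j + \frac{1}{c}\,\epsilon^{ijk} w_j E_k,
% with material parameters read off from the spacetime metric g_{\mu\nu}:
\varepsilon^{ij} = \mu^{ij} = -\frac{\sqrt{-g}}{g_{00}}\, g^{ij}, \qquad
w_i = \frac{g_{0i}}{g_{00}}.
```

    Note that the frame-dragging (off-diagonal $g_{0i}$) components are what generate the magneto-electric coupling vector $w_i$, which is the ingredient that closed null geodesics in the analogue medium depend on.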

    Astrophysically robust systematics removal using variational inference: application to the first month of Kepler data

    Space-based transit search missions such as Kepler are collecting large numbers of stellar light curves of unprecedented photometric precision and time coverage. However, before this scientific goldmine can be exploited fully, the data must be cleaned of instrumental artefacts. We present a new method to correct common-mode systematics in large ensembles of very high precision light curves. It is based on a Bayesian linear basis model and uses shrinkage priors for robustness, variational inference for speed, and a de-noising step based on empirical mode decomposition to prevent the introduction of spurious noise into the corrected light curves. After demonstrating the performance of our method on a synthetic dataset, we apply it to the first month of Kepler data. We compare the results to the output of the Kepler pipeline's pre-search data conditioning, and show that the two generally give similar results, but the light curves corrected using our approach have lower scatter, on average, on both long and short timescales. We finish by discussing some limitations of our method and outlining some avenues for further development. The trend-corrected data produced by our approach are publicly available.
    Comment: 15 pages, 13 figures, accepted for publication in MNRAS.
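    The shape of a linear-basis systematics correction can be sketched in a few lines. This is not the paper's code: the basis here is simply taken from principal components of the ensemble, and a single ridge-style penalty stands in for the paper's hierarchical shrinkage priors and variational inference.

```python
import numpy as np

def detrend_ensemble(flux, n_basis=4, shrink=1e-3):
    """Remove common-mode systematics from an ensemble of light curves.

    flux : (n_stars, n_times) array of relative flux values.
    The basis is built from the leading principal components of the
    ensemble (a stand-in for a learned basis), and each light curve is
    fit by MAP estimation under a Gaussian shrinkage prior, which
    reduces to ridge regression.
    """
    resid = flux - flux.mean(axis=1, keepdims=True)
    # Leading right-singular vectors = common-mode trends across stars.
    _, _, vt = np.linalg.svd(resid, full_matrices=False)
    A = vt[:n_basis].T                                # design matrix (n_times, n_basis)
    # Ridge (MAP) solve shared across stars: w = (A^T A + shrink I)^-1 A^T y
    gram = A.T @ A + shrink * np.eye(n_basis)
    coeffs = np.linalg.solve(gram, A.T @ resid.T)     # (n_basis, n_stars)
    systematics = (A @ coeffs).T                      # (n_stars, n_times)
    return flux - systematics
```

    On a synthetic ensemble sharing a common trend, the corrected light curves should show lower per-star scatter than the input, which is the comparison the abstract makes against the Kepler pipeline.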

    Efficient state-space inference of periodic latent force models

    Latent force models (LFM) are principled approaches to incorporating solutions to differential equations within non-parametric inference methods. Unfortunately, the development and application of LFMs can be inhibited by their computational cost, especially when closed-form solutions for the LFM are unavailable, as is the case in many real world problems where these latent forces exhibit periodic behaviour. Given this, we develop a new sparse representation of LFMs which considerably improves their computational efficiency, as well as broadening their applicability, in a principled way, to domains with periodic or near periodic latent forces. Our approach uses a linear basis model to approximate one generative model for each periodic force. We assume that the latent forces are generated from Gaussian process priors and develop a linear basis model which fully expresses these priors. We apply our approach to model the thermal dynamics of domestic buildings and show that it is effective at predicting day-ahead temperatures within the homes. We also apply our approach within queueing theory in which quasi-periodic arrival rates are modelled as latent forces. In both cases, we demonstrate that our approach can be implemented efficiently using state-space methods which encode the linear dynamic systems via LFMs. Further, we show that state estimates obtained using periodic latent force models can reduce the root mean squared error to 17% of that from non-periodic models and 27% of the nearest rival approach, which is the resonator model (Särkkä et al., 2012; Hartikainen et al., 2012).
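    The core idea of a linear basis model for a periodic force can be illustrated directly. The paper derives the basis and weight prior from the Gaussian process covariance; the sketch below substitutes a plain truncated Fourier basis with an isotropic Gaussian weight prior, so it is an illustration of the representation rather than the authors' construction.

```python
import numpy as np

def fourier_basis(t, period, n_harmonics):
    """Truncated Fourier basis used to approximate a periodic latent force."""
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.cos(2 * np.pi * k * t / period))
        cols.append(np.sin(2 * np.pi * k * t / period))
    return np.stack(cols, axis=1)          # (n_times, 1 + 2*n_harmonics)

def fit_periodic_force(t, y, period, n_harmonics=3, noise=0.1, prior_var=1.0):
    """MAP weights of the basis model under an isotropic Gaussian prior
    (a simplified stand-in for weights derived from the GP covariance)."""
    Phi = fourier_basis(t, period, n_harmonics)
    gram = Phi.T @ Phi + (noise**2 / prior_var) * np.eye(Phi.shape[1])
    w = np.linalg.solve(gram, Phi.T @ y)
    return w, Phi @ w
```

    Because the model is linear in a fixed finite basis, it drops straight into a state-space filter as a set of static weight states, which is what makes the efficiency gains the abstract reports possible.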

    On Similarities between Inference in Game Theory and Machine Learning

    In this paper, we elucidate the equivalence between inference in game theory and machine learning. Our aim in so doing is to establish an equivalent vocabulary between the two domains so as to facilitate developments at the intersection of both fields, and as proof of the usefulness of this approach, we use recent developments in each field to make useful improvements to the other. More specifically, we consider the analogies between smooth best responses in fictitious play and Bayesian inference methods. Initially, we use these insights to develop and demonstrate an improved algorithm for learning in games based on probabilistic moderation. That is, by integrating over the distribution of opponent strategies (a Bayesian approach within machine learning) rather than taking a simple empirical average (the approach used in standard fictitious play) we derive a novel moderated fictitious play algorithm and show that it is more likely than standard fictitious play to converge to a payoff-dominant but risk-dominated Nash equilibrium in a simple coordination game. Furthermore, we consider the converse case, and show how insights from game theory can be used to derive two improved mean field variational learning algorithms. We first show that the standard update rule of mean field variational learning is analogous to a Cournot adjustment within game theory. By analogy with fictitious play, we then suggest an improved update rule, and show that this results in fictitious variational play, an improved mean field variational learning algorithm that exhibits better convergence in highly or strongly connected graphical models. Second, we use a recent advance in fictitious play, namely dynamic fictitious play, to derive a derivative action variational learning algorithm, that exhibits superior convergence properties on a canonical machine learning problem (clustering a mixture distribution).
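    The building block shared by both fields is the smooth best response to an opponent's empirical strategy. The sketch below uses a logit (softmax) smoothing as a stand-in for the Bayesian moderation the paper develops, applied to a symmetric two-action coordination game; all names and parameter values are illustrative.

```python
import numpy as np

def smooth_fictitious_play(payoff, steps=5000, temp=0.1, seed=0):
    """Smoothed (logit) fictitious play in a symmetric 2-player game.

    payoff[a, b] is the row player's payoff for action pair (a, b);
    the column player's payoffs are payoff.T. Each player tracks the
    opponent's empirical action frequencies (with uniform pseudo-counts
    as a crude prior) and plays a softmax best response to them.
    """
    rng = np.random.default_rng(seed)
    n = payoff.shape[0]
    counts = [np.ones(n), np.ones(n)]          # pseudo-counts per player
    for _ in range(steps):
        for i in (0, 1):
            opp = counts[1 - i] / counts[1 - i].sum()
            u = payoff @ opp if i == 0 else payoff.T @ opp
            p = np.exp((u - u.max()) / temp)   # logit smooth best response
            p /= p.sum()
            counts[i][rng.choice(n, p=p)] += 1
    return [c / c.sum() for c in counts]
```

    In a pure coordination game both players' empirical mixes concentrate on the same action, i.e. the process settles on one of the pure Nash equilibria; the paper's moderated variant refines which equilibrium is selected.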

    Blended E85-diesel fuel droplet heating and evaporation

    The multidimensional quasi-discrete (MDQD) model is applied to the analysis of heating and evaporation of mixtures of E85 (85 vol % ethanol and 15 vol % gasoline) with diesel fuel, commonly known as “E85–diesel” blends, using the universal quasi-chemical functional group activity coefficients model for the calculation of vapor pressure. The contributions of 119 components of E85–diesel fuel blends are taken into account, with these components replaced by a smaller number of components/quasi-components, under conditions representative of diesel engines. Our results show that high fractions of E85 in E85–diesel fuel blends have a significant impact on the evolutions of droplet radii and surface temperatures. For instance, the droplet lifetime and surface temperature for a blend of 50 vol % E85 and 50 vol % diesel are 23.2% and up to 3.4% less than those of pure diesel fuel, respectively. The application of the MDQD model improves computational efficiency significantly with minimal sacrifice to accuracy. This approach leads to a saving of up to 86.4% of CPU time when reducing the 119 components to 16 components/quasi-components, without sacrificing the main features of the model.
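    The component-reduction step can be illustrated generically: quasi-component models replace many discrete species with a few groups that conserve the summed mole fractions and mole-fraction-weighted mean properties. The grouping rule below (contiguous slices after sorting by carbon number) is a toy stand-in, not the MDQD grouping itself.

```python
import numpy as np

def lump_components(mole_frac, carbon_num, n_groups):
    """Lump discrete components into quasi-components: each group keeps
    the summed mole fraction and the mole-fraction-weighted mean carbon
    number. Grouping is by simple contiguous slices after sorting by
    carbon number (an illustrative rule only)."""
    order = np.argsort(carbon_num)
    x = np.asarray(mole_frac, dtype=float)[order]
    n = np.asarray(carbon_num, dtype=float)[order]
    groups = np.array_split(np.arange(len(x)), n_groups)
    qx = np.array([x[g].sum() for g in groups])                    # mole fractions
    qn = np.array([np.average(n[g], weights=x[g]) for g in groups])  # mean carbon numbers
    return qx, qn
```

    Reducing 119 species to 16 quasi-components in this spirit is what yields the CPU-time savings quoted above, since the transient heating/evaporation equations are solved per component.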

    Analyses of Kings Creek Water and Watershed Runoff Samples for Bacteroidales using qPCR to Detect Human Fecal Contamination

    The purpose of this work was to evaluate and analyze water samples collected from the Kings Creek watershed using a qPCR-based method to detect both total Bacteroidales and Bacteroidales reported to be associated with human fecal contamination. Quantitative real-time PCR assays were used to significantly reduce processing times and at the same time yield estimates of target concentrations. Initial efforts focused on evaluation of various Bacteroidales primer sets reported in the literature, tested against human and animal fecal samples collected from the Kings Creek watershed. Most samples, both animal and human, were positive with the universal (i.e. general or total) Bacteroidales assay. Strong positive signals were found with human sewage using the human-specific assay that was chosen for this study. Most animal scat samples were negative with respect to the human-specific Bacteroidales indicator. The few animal samples that were positive with the human-specific assay had very low signal intensity. Despite the generally pervasive drought conditions during this study, evidence of human contamination was detected at certain feeder stream locations, and was widespread after a significant rain event that occurred in late fall. Use of the human-specific Bacteroidales indicator holds promise as a tool to identify potential human, as opposed to animal, sources of contamination, but will require a more comprehensive field monitoring and sample collection effort than could be managed in this preliminary study.
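    The abstract does not give the quantification details, but qPCR concentration estimates are conventionally obtained from a standard curve relating threshold cycle (Ct) to log10 target concentration. The following generic sketch shows that calculation; the specific slope and intercept values are illustrative, not from this study.

```python
import numpy as np

def fit_standard_curve(log10_conc, ct):
    """Fit Ct = slope * log10(conc) + intercept over a dilution series.
    A slope near -3.32 corresponds to ~100% amplification efficiency."""
    slope, intercept = np.polyfit(log10_conc, ct, 1)
    efficiency = 10.0 ** (-1.0 / slope) - 1.0
    return slope, intercept, efficiency

def quantify(ct, slope, intercept):
    """Invert the standard curve to estimate the target concentration."""
    return 10.0 ** ((ct - intercept) / slope)
```

    This is the mechanism that lets qPCR "yield estimates of target concentrations" while remaining much faster than culture-based enumeration.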

    Ultrafast-Contactless Flash Sintering using Plasma Electrodes

    This paper presents a novel derivative of flash sintering, in which contactless flash sintering (CFS) is achieved using plasma electrodes. In this setup, electrical contact with the sample to be sintered is made by two arc plasma electrodes, one on either side, allowing current to pass through the sample. This opens up the possibility of continuous-throughput flash sintering. Preheating, a usual precondition for flash sintering, is provided by the arc electrodes, which heat the sample to 1400 °C. The best results were produced with pre-compacted samples (bars 1.8 mm thick) of pure B₄C (discharge time 2 s, current 4 A) and SiC:B₄C 50 wt% (3 s at 6 A), which were fully consolidated under a heating rate approaching 20,000 °C/min. For the composite, a cylindrical volume of 14 mm³ was sintered to full density with limited grain growth.

    The relationship of strategy, fit, productivity, and business performance in a services setting

    In their review of the operations strategy literature, Anderson et al. [Anderson, J.C., Cleveland, G., Schroeder, R.G., 1989. Operations strategy: a literature review. J. Operations Manage., 8(2): 133‐158] contend that the hypothesis that a company will perform better if it links its operations strategy to the business strategy is intuitively appealing, but lacks empirical verification. In light of this contention, this research attempts to: (1) define and measure the concept of fit as it applies to operations strategy; (2) show how fit leads to better performance; and (3) investigate the interrelationships between fit, business strategy, productivity, and performance. These objectives are investigated through field‐based research within a wholesale distribution service setting. Utilizing the classificatory framework of Venkatraman [Venkatraman, N., 1989. The concept of fit in strategy research: toward verbal and statistical correspondence. Acad. Manage. Rev., 14(3): 423‐444], fit is defined as the degree to which operational elements match the business strategy. This precise definition closely resembles the concept of ‘external fit’ that began with the work of Skinner [Skinner, W., 1969. Manufacturing–missing link in corporate strategy. Harvard Bus. Rev., 47(3): 136‐145]. A conceptual model of business performance is used with productivity as a mediating variable between the independent variables of business strategy and external fit and the dependent variable of business performance. Path analysis is used to analyze the effect of external fit on performance and to investigate the interrelationships between fit, business strategy, productivity, and performance. The results show that external fit has a significant positive and direct effect on business performance. 
When coupled with the nonsignificant direct effects of the strategy variables, this suggests that the fit of the operational elements with the strategy is of greater importance than the particular choice of strategy. Although all three business strategies (low cost, a combination of low cost and high customer service, and high customer service) had no significant direct effects on performance, a high customer service strategy did have a significant positive effect on the intervening productivity variable. Finally, the particular design of the research and the findings suggest that much of the conceptual work in operations strategy may be applicable to service operations as well as manufacturing.
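    Path analysis with a mediating variable, as used above, decomposes an effect into a direct path and an indirect path through the mediator. The toy two-equation sketch below (all variable names hypothetical, standing in for fit, productivity, and performance) shows the mechanics, not the paper's full specification.

```python
import numpy as np

def ols(X, y):
    """Least-squares coefficients with an intercept column prepended."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

def path_effects(fit, productivity, performance):
    """Decompose the effect of 'fit' on performance into a direct path
    and an indirect path mediated by productivity."""
    a = ols(fit[:, None], productivity)[1]                 # fit -> productivity
    b = ols(np.column_stack([fit, productivity]), performance)
    direct = b[1]                                          # fit -> performance
    via_productivity = b[2]                                # productivity -> performance
    return direct, a * via_productivity                    # direct, indirect
```

    A "significant positive and direct effect" in the path diagram corresponds to the `direct` coefficient, while an effect routed through productivity shows up as the product of the two path coefficients.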

    A geospatiotemporal and causal inference epidemiological exploration of substance and cannabinoid exposure as drivers of rising US pediatric cancer rates

    Background: Age-adjusted US total pediatric cancer incidence rates (TPCIR) rose 49% over 1975–2015 for unknown reasons. Prenatal cannabis exposure has been linked with several pediatric cancers which together comprise the majority of pediatric cancer types. We investigated whether cannabis use was related spatiotemporally and causally to TPCIR. Methods: State-based age-adjusted TPCIR data were taken from the CDC Surveillance, Epidemiology and End Results cancer database 2003–2017. Drug exposure was taken from the nationally representative National Survey on Drug Use and Health (response rate 74.1%). Drugs included were: tobacco, alcohol, cannabis, opioid analgesics and cocaine. This was supplemented by cannabinoid concentration data from the Drug Enforcement Administration and ethnicity and median household income data from the US Census. Results: TPCIR rose while all drug use nationally fell, except for cannabis which rose. TPCIR in the highest cannabis use quintile was greater than in the lowest (β-estimate = 1.31 (95% C.I. 0.82, 1.80), P = 1.80 × 10⁻⁷) and the time:highest two quintiles interaction was significant (β-estimate = 0.1395 (0.82, 1.80), P = 1.00 × 10⁻¹⁴). In robust inverse probability weighted additive regression models cannabis was independently associated with TPCIR (β-estimate = 9.55 (3.95, 15.15), P = 0.0016). In interactive geospatiotemporal models including all drug, ethnic and income variables cannabis use was independently significant (β-estimate = 45.67 (18.77, 72.56), P = 0.0009). In geospatial models temporally lagged by 1, 2, 4 and 6 years, interactive terms including cannabis were significant. Cannabis interactive terms at one and two degrees of spatial lagging were significant (from β-estimate = 3954.04 (1565.01, 6343.09), P = 0.0012). The interaction between the cannabinoids THC and cannabigerol was significant at zero, 2 and 6 years lag (from β-estimate = 46.22 (30.06, 62.38), P = 2.10 × 10⁻⁸).
Cannabis legalization was associated with higher TPCIR (β-estimate = 1.51 (0.68, 2.35), P = 0.0004) and cannabis-liberal regimes were associated with a higher time:TPCIR interaction (β-estimate = 1.87 × 10⁻⁴ (2.9 × 10⁻⁵, 2.45 × 10⁻⁴), P = 0.0208). 33/56 minimum e-Values were > 5 and 6 were infinite. Conclusion: Data confirm a close relationship across space and lagged time between cannabis and TPCIR which was robust to adjustment, supported by inverse probability weighting procedures and accompanied by high e-Values making confounding unlikely and establishing the causal relationship. Cannabis-liberal jurisdictions were associated with higher rates of TPCIR and a faster rate of TPCIR increase. Data inform the broader general consideration of cannabinoid-induced genotoxicity.
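    Inverse probability weighting, used in the models above, reweights observations by the inverse of their exposure probability so that the exposure is decoupled from measured confounders. The sketch below is a generic (stabilized, Hájek-style) IPW estimator on simulated data, not the paper's model; variable names are illustrative.

```python
import numpy as np

def ipw_weights(exposure, propensity):
    """Stabilized inverse-probability weights for a binary exposure.
    propensity[i] is P(exposure=1 | confounders) for unit i."""
    p_marginal = exposure.mean()
    return np.where(exposure == 1,
                    p_marginal / propensity,
                    (1.0 - p_marginal) / (1.0 - propensity))

def weighted_effect(y, exposure, w):
    """Weighted difference in outcome means (a simple IPW estimator)."""
    y1 = np.average(y[exposure == 1], weights=w[exposure == 1])
    y0 = np.average(y[exposure == 0], weights=w[exposure == 0])
    return y1 - y0
```

    On data where a confounder drives both exposure and outcome, the naive difference in means is biased, while the IPW estimate recovers the true effect; in practice the propensity is itself estimated from the confounders rather than known.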